Relative Expected Instantaneous Loss Bounds
Authors
Abstract
In the literature, a number of relative loss bounds have been shown for on-line learning algorithms. Here the relative loss is the total loss of the on-line algorithm over all trials minus the total loss of the best comparator chosen off-line. However, for many applications instantaneous loss bounds are more interesting: the learner first sees a batch of examples and then uses these examples to make a prediction on a new instance. We show relative expected instantaneous loss bounds for the case when the examples are i.i.d. with an unknown distribution. We bound the expected loss of the algorithm on the last example minus the expected loss of the best comparator on a random example. In particular, we study linear regression and density estimation problems and show how the leave-one-out loss can be used to prove instantaneous loss bounds for these cases. For linear regression we use an algorithm that is similar to a new on-line learning algorithm developed by Vovk. Recently, a large number of relative total loss bounds of the form O(ln T) have been shown, where T is the number of trials/examples. Standard conversions of on-line algorithms to batch algorithms yield relative expected instantaneous loss bounds of the form O((ln T)/T); our methods lead to O(1/T) bounds. We also prove lower bounds showing that our upper bound on the relative expected instantaneous loss for Gaussian density estimation is optimal. In the case of linear regression, we show that our bounds are tight within a factor of two.
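To make the batch protocol above concrete, the sketch below predicts on a new instance from T-1 i.i.d. examples using a ridge-regression-style predictor in the spirit of the Vovk-Azoury-Warmuth forecaster (one plausible reading of "an algorithm similar to a new on-line learning algorithm developed by Vovk"), and computes the empirical leave-one-out squared loss that the analysis revolves around. The function names, the regularization parameter a, and the toy data are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def vaw_style_prediction(X, y, x_new, a=1.0):
    """Predict the label of x_new from a batch (X, y) of i.i.d. examples.

    Ridge-regression-style predictor in the spirit of the
    Vovk-Azoury-Warmuth forecaster: the new instance x_new is included
    in the regularized second-moment matrix.  `a` is a hypothetical
    regularization parameter.
    """
    d = X.shape[1]
    A = a * np.eye(d) + X.T @ X + np.outer(x_new, x_new)
    b = X.T @ y
    w = np.linalg.solve(A, b)
    return float(x_new @ w)

def leave_one_out_loss(X, y, a=1.0):
    """Average leave-one-out squared loss on the batch.

    The paper relates the expected instantaneous loss of the algorithm
    to its leave-one-out loss; this is only an empirical illustration
    of that quantity.
    """
    T = len(y)
    losses = []
    for i in range(T):
        mask = np.arange(T) != i
        pred = vaw_style_prediction(X[mask], y[mask], X[i], a)
        losses.append((pred - y[i]) ** 2)
    return float(np.mean(losses))

# Toy usage: i.i.d. Gaussian design with a linear target plus noise.
rng = np.random.default_rng(0)
T, d = 50, 3
X = rng.normal(size=(T, d))
w_star = np.array([1.0, -2.0, 0.5])
y = X @ w_star + 0.1 * rng.normal(size=T)
print(vaw_style_prediction(X[:-1], y[:-1], X[-1]))  # prediction on the last example
print(leave_one_out_loss(X, y))                     # leave-one-out squared loss
```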
Similar resources
Bounds on Individual Risk for Log-loss Predictors
In sequential prediction with log-loss as well as density estimation with risk measured by KL divergence, one is often interested in the expected instantaneous loss, or, equivalently, the individual risk at a given fixed sample size n. For Bayesian prediction and estimation methods, it is often easy to obtain bounds on the cumulative risk. Such results are based on bounding the individual seque...
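The snippet below is not that paper's construction; it is only a concrete illustration of the two quantities it contrasts, the cumulative log-loss over a sequence and the instantaneous (individual) log-loss at a fixed sample size, using a standard add-one (Laplace) Bayesian predictor for a Bernoulli source. All names and the example sequence are assumptions.

```python
import math

def laplace_prob_of_one(xs):
    """Add-one (Laplace) sequential predictor for a Bernoulli source:
    probability assigned to outcome 1 after seeing the prefix xs."""
    return (sum(xs) + 1) / (len(xs) + 2)

def cumulative_and_instantaneous_logloss(xs):
    """Cumulative log-loss over the whole sequence and the instantaneous
    log-loss on the final symbol, for the add-one predictor."""
    cumulative = 0.0
    last = None
    for t, x in enumerate(xs):
        p1 = laplace_prob_of_one(xs[:t])
        p = p1 if x == 1 else 1.0 - p1
        last = -math.log(p)       # instantaneous loss at round t
        cumulative += last        # running cumulative loss
    return cumulative, last

# Toy usage on a fixed binary sequence.
seq = [1, 0, 1, 1, 0, 1, 1, 1]
print(cumulative_and_instantaneous_logloss(seq))
```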
Estimating a Bounded Normal Mean Relative to Squared Error Loss Function
Let X_1, ..., X_n be a random sample from a normal distribution with unknown mean and known variance. The usual estimator of the mean, i.e., the sample mean, is the maximum likelihood estimator, which under the squared error loss function is a minimax and admissible estimator. In many practical situations, the mean is known in advance to lie in a bounded interval. In this case, the maximum likelihood estimator...
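The estimator studied in that paper may differ; the sketch below only compares the unrestricted sample mean against a simple projection onto a known interval [-m, m] under squared error loss, estimated by Monte Carlo. The bound m, the sample size, and the function names are hypothetical.

```python
import numpy as np

m = 1.0  # assumed prior bound: the true mean lies in [-m, m]

def sample_mean(x):
    return x.mean()

def projected_mean(x):
    # One simple way to exploit the boundedness: clip the sample mean to [-m, m].
    return np.clip(x.mean(), -m, m)

def squared_error_risk(estimator, theta, sigma=1.0, n=10, trials=20000):
    """Monte Carlo estimate of the squared-error risk of `estimator`
    at true mean `theta`, for samples of size n from N(theta, sigma^2)."""
    rng = np.random.default_rng(0)
    samples = rng.normal(theta, sigma, size=(trials, n))
    estimates = np.apply_along_axis(estimator, 1, samples)
    return float(np.mean((estimates - theta) ** 2))

for theta in (0.0, 0.9):
    print(theta,
          squared_error_risk(sample_mean, theta),
          squared_error_risk(projected_mean, theta))
```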
Tight Lower Bound on the Probability of a Binomial Exceeding its Expectation
We give the proof of a tight lower bound on the probability that a binomial random variable exceeds its expected value. The inequality plays an important role in a variety of contexts, including the analysis of relative deviation bounds in learning theory and generalization bounds for unbounded loss functions.
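As a small illustration of the quantity that result concerns, the sketch below computes P(B(n, p) >= np) exactly from the binomial pmf; the lower bound in question is often stated as a constant such as 1/4 for p >= 1/n, but the exact constant and conditions should be taken from the paper itself. The chosen (n, p) values are arbitrary.

```python
from math import comb

def prob_binomial_at_least_mean(n, p):
    """Exact P(B(n, p) >= n*p) for a binomial random variable B(n, p)."""
    threshold = n * p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if k >= threshold)

# Compare the exact probability against a candidate constant lower bound.
for n in (5, 20, 100):
    for p in (0.05, 0.3, 0.7):
        print(n, p, round(prob_binomial_at_least_mean(n, p), 4))
```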
Some properties of the parametric relative operator entropy
The notion of entropy was introduced by Clausius in 1850, and some of the main steps towards the consolidation of the concept were taken by Boltzmann and Gibbs. Since then several extensions and reformulations have been developed in various disciplines with motivations and applications in different subjects, such as statistical mechanics, information theory, and dynamical systems. Fujii and Kam...
Linear Hinge Loss and Average Margin
We describe a unifying method for proving relative loss bounds for online linear threshold classification algorithms, such as the Perceptron and the Winnow algorithms. For classification problems the discrete loss is used, i.e., the total number of prediction mistakes. We introduce a continuous loss function, called the “linear hinge loss”, that can be employed to derive the updates of the algo...
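A minimal sketch of the setting that entry describes: the Perceptron's discrete mistake-count loss alongside a hinge-style continuous loss max(0, -y(w·x)). The exact "linear hinge loss" defined in that paper may differ in its details; the function names and toy data here are illustrative.

```python
import numpy as np

def perceptron(X, y, epochs=10):
    """Standard Perceptron: update w on every prediction mistake.

    `y` must be in {-1, +1}.  Returns the weight vector and the total
    number of mistakes (the discrete loss mentioned above).
    """
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(epochs):
        for x, label in zip(X, y):
            if label * (w @ x) <= 0:   # mistake: wrong sign (or zero margin)
                w += label * x          # additive update driven by the example
                mistakes += 1
    return w, mistakes

def hinge_losses(w, X, y):
    """Hinge-style loss max(0, -y * (w . x)) per example: a continuous
    upper bound on the 0/1 mistake indicator at zero margin."""
    margins = y * (X @ w)
    return np.maximum(0.0, -margins)

# Toy usage on a linearly separable problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
w, num_mistakes = perceptron(X, y)
print(num_mistakes, hinge_losses(w, X, y).sum())
```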
Journal: J. Comput. Syst. Sci.
Volume: 64, Issue: -
Pages: -
Publication year: 2000